Results 1 - 13 of 13
1.
EMBO Rep ; 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38654122

ABSTRACT

Ribosome biogenesis is initiated in the nucleolus, a multiphase biomolecular condensate formed by liquid-liquid phase separation. The nucleolus is a powerful disease biomarker and stress biosensor whose morphology reflects function. Here we have used digital holographic microscopy (DHM), a label-free quantitative phase contrast microscopy technique, to detect nucleoli in adherent and suspension human cells. We trained convolutional neural networks to detect and quantify nucleoli automatically on DHM images. Holograms containing cell optical thickness information allowed us to define a novel index which we used to distinguish nucleoli whose material state had been modulated optogenetically by blue-light-induced protein aggregation. Nucleoli whose function had been impacted by drug treatment or depletion of ribosomal proteins could also be distinguished. We explored the potential of the technology to detect other natural and pathological condensates, such as those formed upon overexpression of a mutant form of huntingtin, ataxin-3, or TDP-43, and also other cell assemblies (lipid droplets). We conclude that DHM is a powerful tool for quantitatively characterizing nucleoli and other cell assemblies, including their material state, without any staining.

2.
Comput Biol Med ; 148: 105932, 2022 09.
Article in English | MEDLINE | ID: mdl-35964469

ABSTRACT

High-resolution non-destructive 3D microCT imaging allows the visualization and structural characterization of mineralized cartilage and bone. Deriving statistically relevant quantitative structural information about these tissues, however, requires automated segmentation procedures, mainly because manual contouring is user-biased and time-consuming. Despite the increased spatial resolution in microCT 3D volumes, automatic segmentation of mineralized cartilage versus bone remains non-trivial since they have similar grayscale values. Our work investigates how reliable 2D segmentation masks can be predicted automatically based on a (set of) convolutional neural network(s) trained with a limited number of manually annotated samples. To do that, we compared different strategies to select the 2D samples to annotate and considered ensemble learning and test-time augmentation (TTA) to mitigate the limited accuracy and robustness resulting from the small number of annotated training samples. We show that, for a fixed amount of annotated image samples, 2D microCT slices to annotate should preferably be selected in distinct 3D volumes, at regular intervals, rather than being grouped in adjacent slices of the same 3D volume. Two main lessons are drawn regarding the use of ensembles or TTA instead of a single model. First, ensemble learning is shown to improve segmentation accuracy and to reduce the mean and standard deviation of the absolute errors in cartilage characteristics obtained with different initializations of the neural network training process. In contrast, TTA appears to be unable to improve the model's robustness to unlucky initializations. Second, both TTA and ensembling improved the model's confidence in its predictions and segmentation failure detection.
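The ensembling and TTA strategies described above can be sketched as follows. The `predict` stand-in model and the choice of flips as test-time augmentations are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def predict(image):
    # Stand-in for a trained segmentation network's forward pass
    # (hypothetical; returns a per-pixel foreground probability map).
    return np.clip(image, 0.0, 1.0)

def tta_predict(image, model=predict):
    """Test-time augmentation: average predictions over flipped copies."""
    preds = []
    for flip in [lambda x: x,
                 lambda x: x[:, ::-1],
                 lambda x: x[::-1, :],
                 lambda x: x[::-1, ::-1]]:
        # Augment, predict, then undo the augmentation before averaging.
        preds.append(flip(model(flip(image))))
    return np.mean(preds, axis=0)

def ensemble_predict(image, models):
    """Ensemble: average the probability maps of independently trained models."""
    return np.mean([m(image) for m in models], axis=0)
```

Averaged probability maps can then be thresholded into masks, and their spread across ensemble members used as a confidence signal for failure detection.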


Subjects
Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Cartilage , Machine Learning , Magnetic Resonance Imaging , X-Ray Microtomography
3.
Article in English | MEDLINE | ID: mdl-32340946

ABSTRACT

We introduce an effective fusion-based technique to enhance both day-time and night-time hazy scenes. When inverting the Koschmieder light transmission model, and by contrast with the common implementation of the popular dark-channel prior [He et al., CVPR 2009], we estimate the airlight on image patches and not on the entire image. Local airlight estimation is adopted because, under night-time conditions, the lighting generally arises from multiple localized artificial sources, and is thus intrinsically non-uniform. Selecting the sizes of the patches is, however, non-trivial. Small patches are desirable to achieve fine spatial adaptation to the atmospheric light, but large patches help improve the airlight estimation accuracy by increasing the possibility of capturing pixels with airlight appearance (due to severe haze). For this reason, multiple patch sizes are considered to generate several images, which are then merged together. The discrete Laplacian of the original image is provided as an additional input to the fusion process to reduce the glowing effect and to emphasize the finest image details. Similarly, for day-time scenes we apply the same principle but use a larger patch size. For each input, a set of weight maps is derived so as to assign higher weights to regions of high contrast, high saliency and small saturation. Finally, the derived inputs and the normalized weight maps are blended in a multi-scale fashion using a Laplacian pyramid decomposition. Extensive experimental results demonstrate the effectiveness of our approach as compared with recent techniques, both in terms of computational efficiency and the quality of the outputs.
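The local, multi-patch-size airlight estimation described above might be sketched as below. The max-over-patch estimator and the specific patch sizes are simplifying assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def local_airlight(gray, patch):
    """Per-pixel airlight estimate: brightest value in the patch around each
    pixel (a simplified stand-in for a local airlight estimator)."""
    h, w = gray.shape
    out = np.empty_like(gray)
    r = patch // 2
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = gray[y0:y1, x0:x1].max()
    return out

def multi_scale_airlight(gray, sizes=(3, 7, 15)):
    """Several patch sizes yield several airlight maps; each map produces one
    dehazed image, and the images are later merged by the fusion stage."""
    return [local_airlight(gray, s) for s in sizes]
```

Small patches track localized light sources closely, while large patches give more stable estimates, which is exactly the trade-off the fusion step resolves.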

4.
Article in English | MEDLINE | ID: mdl-31751271

ABSTRACT

This article introduces a novel solution to improve image enhancement in terms of color appearance. Our approach, called Color Channel Compensation (3C), overcomes artifacts resulting from the severely non-uniform color spectrum distribution encountered in images captured under hazy night-time conditions, underwater, or under non-uniform artificial illumination. Our solution is founded on the observation that, under such adverse conditions, the information contained in at least one color channel is close to completely lost, making the traditional enhancing techniques subject to noise and color shifting. In those cases, our pre-processing method proposes to reconstruct the lost channel based on the opponent color channel. Our algorithm subtracts a local mean from each opponent color pixel. Thereby, it partly recovers the lost color from the two colors (red-green or blue-yellow) involved in the opponent color channel. The proposed approach, whilst simple, is shown to consistently improve the outcome of conventional restoration methods. To prove the utility of our 3C operator, we provide an extensive qualitative and quantitative evaluation for white balancing, image dehazing, and underwater enhancement applications.
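The core 3C operation, rebuilding a nearly lost channel from its opponent channel with a local mean subtracted, can be sketched as follows. The box-filter local mean and the direct addition of the opponent detail are simplifying assumptions; the exact weighting in the paper may differ.

```python
import numpy as np

def box_mean(channel, radius):
    """Local mean via a box filter, computed with an integral image
    (edge-padded so the window is full-size everywhere)."""
    padded = np.pad(channel, radius, mode='edge')
    k = 2 * radius + 1
    ii = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    ii[1:, 1:] = padded.cumsum(0).cumsum(1)
    h, w = channel.shape
    sums = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w])
    return sums / (k * k)

def compensate_channel(lost, opponent, radius=2):
    """Sketch of the 3C idea: transfer the opponent channel's local detail
    (pixel minus local mean) into the degraded channel."""
    detail = opponent - box_mean(opponent, radius)
    return np.clip(lost + detail, 0.0, 1.0)
```

For an underwater image, `lost` would typically be the red channel and `opponent` the green channel of the red-green opponent pair.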

5.
Nat Protoc ; 13(10): 2387-2406, 2018 10.
Article in English | MEDLINE | ID: mdl-30250292

ABSTRACT

Ribosome biogenesis is initiated in the nucleolus, a cell condensate essential to gene expression, whose morphology informs cancer pathologists on the health status of a cell. Here, we describe a protocol for assessing, both qualitatively and quantitatively, the involvement of trans-acting factors in the nucleolar structure. The protocol involves use of siRNAs to deplete cells of factors of interest, fluorescence imaging of nucleoli in an automated high-throughput platform, and use of dedicated software to determine an index of nucleolar disruption, the iNo score. This scoring system is unique in that it integrates the five most discriminant shape and textural features of the nucleolus into a parametric equation. Determining the iNo score enables both qualitative and quantitative factor classification with prediction of function (functional clustering), which to our knowledge is not achieved by competing approaches, as well as stratification of their effect (severity of defects) on nucleolar structure. The iNo score has the potential to be useful in basic cell biology (nucleolar structure-function relationships, mitosis, and senescence), developmental and/or organismal biology (aging), and clinical practice (cancer, viral infection, and reproduction). The entire protocol can be completed within 1 week.
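The general form of such a parametric index can be illustrated generically. The features, weights, and normalisation below are hypothetical placeholders; the actual five discriminant features and their coefficients are specified in the protocol itself and are not reproduced here.

```python
import numpy as np

def ino_score(features, weights, means, stds):
    """Hypothetical sketch of an iNo-style index: a weighted combination of
    z-scored shape and texture features measured on a segmented nucleolus.
    All parameter values here are illustrative, not the published ones."""
    z = (np.asarray(features, dtype=float) - means) / stds  # standardise
    return float(np.dot(weights, z))                        # parametric combination
```

Because the index is a single scalar per cell, depleted factors can be ranked by the severity of nucleolar disruption they induce, which is what enables the stratification described above.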


Subjects
Cell Nucleolus/pathology , Cell Nucleolus/ultrastructure , Optical Imaging/methods , Cell Nucleolus/genetics , Cellular Senescence , HeLa Cells , High-Throughput Screening Assays/methods , Humans , Image Processing, Computer-Assisted/methods , Interphase , Mitosis , Neoplasms/diagnosis , Neoplasms/pathology , Nuclear Proteins/analysis , Nuclear Proteins/genetics , RNA Interference , RNA, Small Interfering/genetics , Software
6.
IEEE Trans Image Process ; 27(1): 379-393, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28981416

ABSTRACT

We introduce an effective technique to enhance images captured underwater and degraded by medium scattering and absorption. Our method is a single-image approach that does not require specialized hardware or knowledge about the underwater conditions or scene structure. It builds on the blending of two images that are directly derived from a color-compensated and white-balanced version of the original degraded image. The two images to be fused, as well as their associated weight maps, are defined to promote the transfer of edges and color contrast to the output image. To prevent sharp weight-map transitions from creating artifacts in the low-frequency components of the reconstructed image, we also adopt a multi-scale fusion strategy. Our extensive qualitative and quantitative evaluation reveals that our enhanced images and videos are characterized by better exposure of the dark regions, improved global contrast, and sharper edges. Our validation also proves that our algorithm is reasonably independent of the camera settings, and improves the accuracy of several image processing applications, such as image segmentation and keypoint matching.
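The pipeline's first step, white balancing, can be illustrated with the classical gray-world assumption; the paper uses a compensated variant of white balancing, so this is only a baseline sketch.

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale each color channel so its mean matches
    the global mean intensity (a simplified stand-in for the paper's
    compensated white-balancing step). Expects an H x W x 3 float image."""
    means = img.reshape(-1, 3).mean(axis=0)          # per-channel means
    return np.clip(img * (means.mean() / means), 0.0, 1.0)
```

Underwater, the red channel's mean is usually far below the others, so this step alone already reduces the dominant blue-green cast before the fusion inputs are derived.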

7.
IEEE Trans Image Process ; 26(11): 5477-5490, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28783631

ABSTRACT

We consider the synthesis of intermediate views of an object captured by two widely spaced and calibrated cameras. This problem is challenging because foreshortening effects and occlusions induce significant differences between the reference images when the cameras are far apart. That makes the association or disappearance/appearance of their pixels difficult to estimate. Our main contribution lies in disambiguating this ill-posed problem by making the interpolated views consistent with a plausible transformation of the object silhouette between the reference views. This plausible transformation is derived from an object-specific prior that consists of a nonlinear shape manifold learned from multiple previous observations of this object by the two reference cameras. The prior is used to estimate the evolution of the epipolar silhouette segments between the reference views. This information directly supports the definition of epipolar silhouette segments in the intermediate views, as well as the synthesis of textures in those segments. This permits reconstruction of the epipolar plane images (EPIs) and the continuum of views associated with the EPI volume, obtained by aggregating the EPIs. Experiments on synthetic and natural images show that our method preserves the object topology in intermediate views and deals effectively with the self-occluded regions and the severe foreshortening effect associated with wide-baseline camera configurations.

8.
IEEE Trans Pattern Anal Mach Intell ; 39(1): 61-74, 2017 01.
Article in English | MEDLINE | ID: mdl-26915115

ABSTRACT

Given a set of detections, detected at each time instant independently, we investigate how to associate them across time. This is done by propagating labels on a set of graphs, each graph capturing how either the spatio-temporal or the appearance cues promote the assignment of identical or distinct labels to a pair of detections. The graph construction is motivated by a locally linear embedding of the detection features. Interestingly, the neighborhood of a node in the appearance graph is defined to include all the nodes for which the appearance feature is available (even if they are temporally distant). This gives our framework the uncommon ability to exploit the appearance features that are available only sporadically. Once the graphs have been defined, multi-object tracking is formulated as the problem of finding a label assignment that is consistent with the constraints captured by each graph, which results in a difference of convex (DC) program. We propose to decompose the global objective function into node-wise sub-problems. This not only allows a computationally efficient solution, but also supports an incremental and scalable construction of the graph, thereby making the framework applicable to large graphs and practical tracking scenarios. Moreover, it opens the possibility of parallel implementation.

9.
IEEE Trans Image Process ; 26(1): 65-78, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27810821

ABSTRACT

Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to only a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights regarding why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy is able to yield results that are highly competitive with traditional MSF approaches.
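The classical MSF process that SSF approximates can be sketched as follows: Laplacian levels of the inputs are blended with Gaussian levels of the normalised weight maps, and the pyramid is then collapsed. Block-average downsampling and nearest-neighbour upsampling stand in for the usual Gaussian filtering, purely for compactness.

```python
import numpy as np

def down(img):
    """2x downsample by 2x2 block averaging (stand-in for blur + decimate)."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):
    """2x upsample by nearest-neighbour replication."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def msf(inputs, weights, levels=2):
    """Classical multi-scale fusion over a Laplacian pyramid."""
    total = np.sum(weights, axis=0)
    weights = [w / total for w in weights]          # normalise weights per pixel
    fused_levels = []
    for _ in range(levels):
        lap_blend = np.zeros_like(inputs[0])
        next_inputs, next_weights = [], []
        for img, w in zip(inputs, weights):
            small = down(img)
            lap_blend += w * (img - up(small))      # weighted Laplacian level
            next_inputs.append(small)
            next_weights.append(down(w))
        fused_levels.append(lap_blend)
        inputs, weights = next_inputs, next_weights
    base = sum(w * img for img, w in zip(inputs, weights))  # low-pass residual
    for lap in reversed(fused_levels):              # collapse the pyramid
        base = up(base) + lap
    return base
```

The per-level bookkeeping above (one blended Laplacian image per level, plus a residual) is exactly the redundancy that the single-scale reformulation collapses into one level.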

10.
Nat Commun ; 7: 11390, 2016 06 06.
Article in English | MEDLINE | ID: mdl-27265389

ABSTRACT

The nucleolus is a potent disease biomarker and a target in cancer therapy. Ribosome biogenesis is initiated in the nucleolus where most ribosomal (r-) proteins assemble onto precursor rRNAs. Here we systematically investigate how depletion of each of the 80 human r-proteins affects nucleolar structure, pre-rRNA processing, mature rRNA accumulation and p53 steady-state level. We developed an image-processing programme for qualitative and quantitative discrimination of normal from altered nucleolar morphology. Remarkably, we find that uL5 (formerly RPL11) and uL18 (RPL5) are the strongest contributors to nucleolar integrity. Together with the 5S rRNA, they form the late-assembling central protuberance on mature 60S subunits, and act as an Hdm2 trap and p53 stabilizer. Other major contributors to p53 homeostasis are also strictly late-assembling large subunit r-proteins essential to nucleolar structure. The identification of the r-proteins that specifically contribute to maintaining nucleolar structure and p53 steady-state level provides insights into fundamental aspects of cell and cancer biology.


Subjects
Cell Nucleolus/chemistry , Cell Nucleolus/metabolism , Ribosomal Proteins/metabolism , Tumor Suppressor Protein p53/metabolism , Cell Nucleolus/genetics , Humans , RNA, Ribosomal, 5S/chemistry , RNA, Ribosomal, 5S/metabolism , Ribosomal Proteins/chemistry , Ribosomal Proteins/genetics , Tumor Suppressor Protein p53/genetics
11.
IEEE Trans Image Process ; 20(9): 2636-49, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21411407

ABSTRACT

In this paper, we address the issues of analyzing and classifying JPEG 2000 code-streams. An original representation, called integral volume, is first proposed to compute local image features progressively from the compressed code-stream, on any spatial image area, regardless of the code-block borders. Then, a JPEG 2000 classifier is presented that uses integral volumes to learn an ensemble of randomized trees. Several classification tasks are performed on various JPEG 2000 image databases and results are in the same range as the ones obtained in the literature with noncompressed versions of these databases. Finally, a cascade of such classifiers is considered, in order to specifically address the image retrieval issue, i.e., bi-class problems characterized by a highly skewed distribution. An efficient way to learn and optimize such a cascade is proposed. We show that staying in a JPEG 2000 framework, initially seen as a constraint to avoid heavy decoding operations, is actually an advantage as it can benefit from the multiresolution and multilayer paradigms inherently present in this compression standard. In particular, unlike other existing cascaded retrieval systems, the features used along our cascade are increasingly discriminant and therefore lead to a better complexity-versus-performance tradeoff.
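The integral-volume idea, a cumulative-sum table from which any axis-aligned region sum is read in constant time, can be sketched in 3D as follows. The paper applies it to features derived from the JPEG 2000 code-stream; plain voxel sums are used here for illustration.

```python
import numpy as np

def integral_volume(vol):
    """3D cumulative-sum table: entry (z, y, x) holds the sum of vol[:z, :y, :x]."""
    ii = np.zeros(tuple(s + 1 for s in vol.shape))
    ii[1:, 1:, 1:] = vol.cumsum(0).cumsum(1).cumsum(2)
    return ii

def region_sum(ii, z0, z1, y0, y1, x0, x1):
    """Sum of vol[z0:z1, y0:y1, x0:x1] in O(1) via 3D inclusion-exclusion."""
    return (ii[z1, y1, x1] - ii[z0, y1, x1] - ii[z1, y0, x1] - ii[z1, y1, x0]
            + ii[z0, y0, x1] + ii[z0, y1, x0] + ii[z1, y0, x0] - ii[z0, y0, x0])
```

The constant-time query is what lets features be evaluated on arbitrary spatial areas without regard to code-block borders.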

12.
IEEE Trans Image Process ; 16(5): 1339-54, 2007 May.
Article in English | MEDLINE | ID: mdl-17491464

ABSTRACT

This paper considers the issues of scheduling and caching JPEG2000 data in client/server interactive browsing applications, under memory and channel bandwidth constraints. It analyzes how the conveyed data have to be selected at the server and managed within the client cache so as to maximize the reactivity of the browsing application. Formally, to capture the dynamic nature of the browsing session, we assume the existence of a reaction model that defines when the user issues a new command as a function of the image quality displayed at the client. As a main outcome, our work demonstrates that, due to the latency inherent to client/server exchanges, a priori expectation about future navigation commands may help to improve the overall reactivity of the system. In our study, the browsing session is defined by the evolution of a rectangular window of interest (WoI) over time. At any given time, the WoI defines the position and the resolution of the image data to display at the client. The expectation about future navigation commands is then formalized based on a stochastic navigation model, which defines the probability that a given WoI is requested next, knowing previous WoI requests. Based on that knowledge, several scheduling scenarios are considered. The first scenario is conventional and transmits all the data corresponding to the current WoI before prefetching the most promising data outside the current WoI. Alternative scenarios are then proposed to anticipate prefetching, by scheduling data expected to be requested in the future before all the current WoI data have been sent out. Our results demonstrate that, for predictable navigation commands, anticipated prefetching improves the overall reactivity of the system by up to 30% compared to the conventional scheduling approach. They also reveal that an accurate knowledge of the reaction model is not required to get these significant improvements.


Subjects
Algorithms , Artifacts , Computer Communication Networks , Data Compression/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Signal Processing, Computer-Assisted , Numerical Analysis, Computer-Assisted
13.
IEEE Trans Image Process ; 12(10): 1226-42, 2003.
Article in English | MEDLINE | ID: mdl-18237889

ABSTRACT

This paper provides a precise analytical study of the selection and modulus quantization of matching pursuit (MP) coefficients. We demonstrate that an optimal rate-distortion trade-off is achieved by selecting the atoms up to a quality-dependent threshold, and by defining the modulus quantizer in terms of that threshold. In doing so, we take into account quantization error re-injection resulting from inserting the modulus quantizer inside the MP atom computation loop. In-loop quantization not only improves coding performance, but also affects the optimal quantizer design for both uniform and nonuniform quantization. We measure the impact of our work in the context of video coding. For both uniform and nonuniform quantization, the precise understanding of the relation between atom selection and quantization results in significant improvements in terms of coding efficiency. At high bitrates, the proposed nonuniform quantization scheme results in 0.5 to 2 dB improvement over the previous method.
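The interplay of threshold-based atom selection and in-loop modulus quantization can be sketched as follows. The uniform quantizer and the toy row-per-atom dictionary interface are simplifying assumptions; the paper additionally derives the optimal quantizer design, which is not reproduced here.

```python
import numpy as np

def mp_in_loop(signal, dictionary, threshold, step):
    """Matching pursuit with in-loop modulus quantization: atoms are selected
    while their modulus exceeds a quality-dependent threshold, and the residual
    is updated with the QUANTIZED modulus, so quantization error is re-injected
    into later iterations. `dictionary` holds one unit-norm atom per row."""
    residual = signal.astype(float).copy()
    atoms = []
    while True:
        corr = dictionary @ residual                 # atom/residual correlations
        k = np.argmax(np.abs(corr))
        if np.abs(corr[k]) < threshold:
            break                                    # selection stops at threshold
        q = step * np.round(corr[k] / step)          # uniform modulus quantizer
        if q == 0.0:
            break
        residual -= q * dictionary[k]                # subtract the quantized atom
        atoms.append((k, q))
    return atoms, residual
```

Because the quantized (rather than exact) modulus is subtracted, later atoms can partially compensate the quantization error, which is the mechanism behind the coding gain of in-loop quantization.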
